
    A linear framework for character skinning

    Character animation is the process of modelling and rendering a mobile character in a virtual world. It has numerous applications both off-line, such as virtual actors in films, and real-time, such as in games and other virtual environments. There are a number of algorithms for determining the appearance of an animated character, with different trade-offs between quality, ease of control, and computational cost. We introduce a new method, animation space, which provides a good balance between the ease-of-use of very simple schemes and the quality of more complex schemes, together with excellent performance. It can also be integrated into a range of existing computer graphics algorithms.

    Animation space is described by a simple and elegant linear equation. Apart from making it fast and easy to implement, linearity facilitates mathematical analysis. We derive two metrics on the space of vertices (the “animation space”), which indicate the mean and maximum distances between two points on an animated character. We demonstrate the value of these metrics by applying them to the problems of parametrisation, level-of-detail (LOD) and frustum culling. These metrics provide information about the entire range of poses of an animated character, so they are able to produce better results than considering only a single pose of the character, as is commonly done.

    In order to compute parametrisations, it is necessary to segment the mesh into charts. We apply an existing algorithm based on greedy merging, but use a metric better suited to the problem than the one suggested by the original authors. To combine the parametrisations with level-of-detail, we require the charts to have straight edges. We explored a heuristic approach to straightening the edges produced by the automatic algorithm, but found that manual segmentation produced better results. Animation space is nevertheless beneficial in flattening the segmented charts; we use least squares conformal maps (LSCM), with the Euclidean distance metric replaced by one of our animation-space metrics. The resulting parametrisations have significantly less overall stretch than those computed based on a single pose.

    Similarly, we adapt appearance preserving simplification (APS), a progressive mesh-based LOD algorithm, to apply to animated characters by replacing the Euclidean metric with an animation-space metric. When using the memoryless form of APS (in which local rather than global error is considered), the use of animation space for computations reduces the geometric errors introduced by LOD decomposition, compared to simplification based on a single pose. User tests, in which users compared video clips of the two, demonstrated a statistically significant preference for the animation-space simplifications, indicating that the visual quality is better as well. While other methods exist to take multiple poses into account, they are based on a sampling of the pose space, and the computational cost scales with the number of samples used. In contrast, our method is analytic and uses samples only to gather statistics.

    The quality of LOD approximations is improved further by introducing a novel approach to LOD, influence simplification, in which we remove the influences of bones on vertices, and adjust the remaining influences to approximate the original vertex as closely as possible. Once again, we use an animation-space metric to determine the approximation error.
    By combining influence simplification with the progressive mesh structure, we can obtain further improvements in quality: for some models and at some detail levels, the error is reduced by an order of magnitude relative to a pure progressive mesh. User tests showed that for some models this significantly improves quality, while for others it makes no significant difference.

    Animation space is a generalisation of skeletal subspace deformation (SSD), a popular method for real-time character animation. This means that there is a large existing base of models that can immediately benefit from the modified algorithms mentioned above. Furthermore, animation space almost entirely eliminates the well-known shortcomings of SSD (the so-called “candy-wrapper” and “collapsing elbow” effects). We show that given a set of sample poses, we can fit an animation-space model to these poses by solving a linear least-squares problem.

    Finally, we demonstrate that animation space is suitable for real-time rendering, by implementing it, along with level-of-detail rendering, on a PC with a commodity video card. We show that although the extra degrees of freedom make the straightforward approach infeasible for complex models, it is still possible to obtain high performance; in fact, animation space requires fewer basic operations to transform a vertex position than SSD. We also consider two methods of lighting LOD-simplified models using the original normals: tangent-space normal maps, an existing method that is fast to render but does not capture dynamic structures such as wrinkles; and tangent maps, a novel approach that encodes animation-space tangent vectors into textures, and which captures dynamic structures. We compare the methods both for performance and quality, and find that tangent-space normal maps are at least an order of magnitude faster, while user tests failed to show any perceived difference in quality between them.
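    The linear relationship described above can be sketched as follows; the notation (per-bone matrices G_i, rest-pose vertex v_hat, weights w_i, and per-bone 4-vectors p_i) is assumed here for illustration rather than quoted from the thesis.

```python
import numpy as np

def ssd_vertex(bone_matrices, weights, v_rest):
    """SSD: blend the rigidly transformed rest-pose vertex, v = sum_i w_i * (G_i @ v_hat)."""
    v = np.zeros(4)
    for G, w in zip(bone_matrices, weights):
        v += w * (G @ v_rest)
    return v

def animation_space_vertex(bone_matrices, p_vectors):
    """Animation space: fold the weight into a general per-bone 4-vector p_i,
    so the posed vertex v = sum_i G_i @ p_i is a purely linear function of the p_i."""
    v = np.zeros(4)
    for G, p in zip(bone_matrices, p_vectors):
        v += G @ p
    return v

# SSD is recovered as the special case p_i = w_i * v_hat, which is why existing
# SSD models can be used directly as animation-space models.
```

    Folding the weight into p_i also removes the per-bone scalar multiply, which is consistent with the statement above that animation space requires fewer basic operations to transform a vertex position than SSD.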

    Genetic selection of parametric scenes

    Using a modelling package such as Alias Maya or SoftImage XSi to create a natural scene is too tedious to be practical. Procedural generation techniques reduce the amount of work involved, but there may still be too many parameters to be selected manually. We propose a new method of generating natural scenes, using a genetic algorithm (GA) to infer the user’s preferences from user feedback. In order to allow the goal to be reached in a reasonable time, the GA must converge quickly. The scene generation and display preprocessing must also be efficient. We present techniques that attain these goals while still producing reasonable quality output and interactive frame-rates. We also compare this approach to having a user manually select parameters.
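    As a rough illustration of the interactive loop described above: the parameter count, population size, and mutation scheme below are invented for the sketch and are not taken from the paper.

```python
import random

# Hypothetical scene parameters: each individual is a vector of values in [0, 1]
# that a procedural generator might map to e.g. tree density or terrain roughness.
NUM_PARAMS = 8
POP_SIZE = 9          # small enough to show every candidate scene to the user at once

def random_individual():
    return [random.random() for _ in range(NUM_PARAMS)]

def crossover(a, b):
    # Uniform crossover: take each parameter from one parent or the other.
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def mutate(ind, rate=0.1, sigma=0.15):
    # Perturb a few parameters with Gaussian noise, clamped to [0, 1].
    return [min(1.0, max(0.0, x + random.gauss(0.0, sigma))) if random.random() < rate else x
            for x in ind]

def evolve(population, user_scores):
    """One GA generation driven purely by the user's preference scores (higher = preferred)."""
    ranked = [ind for _, ind in sorted(zip(user_scores, population), key=lambda t: -t[0])]
    parents = ranked[:max(2, POP_SIZE // 3)]            # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    return parents + children                            # elitism keeps the best scenes
```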

    MLS reconstruction from noisy point sets

    For digital preservation of cultural heritage sites in Africa, laser range scanning has been used to produce point clouds. The literature contains extensive work on reconstructing surface models from such point clouds, but often this prior work does not account for artefacts in the data such as vegetation. We have assessed several variations on a specific moving-least-squares (MLS) technique to determine the impact on the quality of the reconstructed surfaces. We found that correct feature size detection and explicit detection of boundaries are important, while a single iteration of almost orthogonal projection is sufficient to give good results.
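    For orientation, a single weighted-plane-fit projection step in the spirit of MLS surface reconstruction might look like the sketch below; the neighbourhood radius and Gaussian weighting are assumptions, and the paper's specific "almost orthogonal" variant is not reproduced here.

```python
import numpy as np

def mls_project(x, points, radius):
    """Project point x onto a locally fitted plane: one MLS-style projection step."""
    d = np.linalg.norm(points - x, axis=1)
    nbrs = points[d < radius]
    w = np.exp(-(d[d < radius] / radius) ** 2)           # Gaussian distance weights
    centroid = (w[:, None] * nbrs).sum(axis=0) / w.sum()
    diffs = nbrs - centroid
    cov = (w[:, None, None] * np.einsum('ni,nj->nij', diffs, diffs)).sum(axis=0)
    normal = np.linalg.eigh(cov)[1][:, 0]                # smallest-eigenvalue direction
    return x - np.dot(x - centroid, normal) * normal     # project onto the fitted plane
```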

    A Comparison of Linear Skinning Techniques for Character Animation

    Character animation is the task of moving a complex, artificial character in a life-like manner. A widely used method for character animation involves embedding a simple skeleton within a character model and then animating the character by moving the underlying skeleton. The character's skin is required to move and deform along with the skeleton. Research into this problem has resulted in a number of skinning frameworks. There has, however, been no objective attempt to compare these methods. We compare three linear skinning frameworks that are computationally efficient enough to be used for real-time animation: Skeletal Subspace Deformation (SSD), Animation Space and Multi-Weight Enveloping. These create a correspondence between the points on a character's skin and the underlying skeleton by means of a number of weights, with more weights providing greater flexibility. The quality of each of the three frameworks is tested by generating the skins for a number of poses for which the ideal skin is known. These generated skin meshes are then compared to the ideal skins using various mesh comparison techniques and human studies are used to determine the effect of any temporal artefacts introduced. We found that SSD lacks flexibility while Multi-Weight Enveloping is prone to overfitting. Animation Space consistently outperforms the other two frameworks.
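    The abstract does not spell out the comparison metrics; the simplest vertex-wise variants, assuming the generated and ideal skins share connectivity, are sketched below as an illustration only.

```python
import numpy as np

def rms_error(generated, ideal):
    """Root-mean-square of per-vertex distances between two skins with matching vertices."""
    d = np.linalg.norm(generated - ideal, axis=1)
    return np.sqrt(np.mean(d ** 2))

def max_error(generated, ideal):
    """Worst-case per-vertex distance (a Hausdorff-style bound for matching connectivity)."""
    return np.linalg.norm(generated - ideal, axis=1).max()
```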

    Mesh Approximation for Animated Characters

    A widely used method for character animation involves embedding a simple skeleton within a character model and then animating the character by moving the underlying skeleton. The character’s skin is required to move and deform along with the skeleton. Research into this problem has resulted in a number of different skinning frameworks. There has, however, been no objective attempt to compare these methods. We compare three of the skinning frameworks that are computationally efficient enough to be used for real-time animation. These frameworks are: Skeletal Subspace Deformation, Multi-Weight Enveloping and Animation Space. The performance of the three frameworks is tested by generating the skins for a number of poses for which the ideal skin is known. These generated skin meshes are then compared to the ideal skins using various mesh comparison techniques as well as user comparisons.

    Animation space: a truly linear framework for character animation

    Skeletal subspace deformation (SSD), a simple method of character animation used in many applications, has several shortcomings; the best-known being that joints tend to collapse when bent. We present animation space, a generalization of SSD that greatly reduces these effects and effectively eliminates them for joints that do not have an unusually large range of motion. While other, more expensive generalizations exist, ours is unique in expressing the animation process as a simple linear transformation of the input coordinates. We show that linearity can be used to derive a measure of average distance (across the space of poses), and apply this to improving parametrizations. Linearity also makes it possible to fit a model to a set of examples using least-squares methods. The extra generality in animation space allows for a good fit to realistic data, and overfitting can be controlled to allow fitted models to generalize to new poses. Despite the extra vertex attributes, it is possible to render these animation-space models in hardware with no loss of performance relative to SSD.
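    The fitting step mentioned here is an ordinary linear least-squares problem. In the (assumed) notation used in the sketch earlier on this page, where a posed vertex is v = sum_i G_i p_i, the per-vertex coefficients p_i for example poses indexed by k would minimise

```latex
\min_{p_1,\dots,p_n} \; \sum_{k} \left\lVert \sum_{i} G_i^{(k)} p_i \;-\; v^{(k)} \right\rVert^2
```

    where G_i^{(k)} are the bone matrices of sample pose k and v^{(k)} is the observed vertex position in that pose. Any standard least-squares solver applies; the overfitting control mentioned in the abstract would correspond to regularising or constraining this system, though the exact mechanism is not stated here.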

    Compression of Dense and Regular Point Clouds

    We present a simple technique for single-rate compression of point clouds sampled from a surface, based on a spanning tree of the points. Unlike previous methods, we predict future vertices using both a linear predictor, which uses the previous edge as a predictor for the current edge, and lateral predictors that rotate the previous edge 90 degrees left or right about an estimated normal. By careful construction of the spanning tree and choice of prediction rules, our method improves upon existing compression rates when applied to regularly sampled point sets, such as those produced by laser range scanning or uniform tessellation of higher-order surfaces. For less regular sets of points, the compression rate is still generally within 1.5 bits per point of other compression algorithms.
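    The lateral predictors can be illustrated directly: rotating the previous edge by 90 degrees about an estimated normal is a special case of Rodrigues' rotation formula. The sketch below assumes the normal is already unit length; it illustrates the predictor geometry only, not the paper's tree construction or entropy coding.

```python
import numpy as np

def rotate90(edge, normal, left=True):
    """Rotate the previous edge vector 90 degrees about the (unit) estimated normal.
    With cos(90) = 0 and sin(90) = +/-1, Rodrigues' formula reduces to the two terms below."""
    s = 1.0 if left else -1.0
    return s * np.cross(normal, edge) + normal * np.dot(normal, edge)

def predict_next(prev_point, prev_edge, normal):
    """Candidate predictions for the next point: linear, lateral-left, lateral-right."""
    return [prev_point + prev_edge,                              # linear predictor
            prev_point + rotate90(prev_edge, normal, left=True),
            prev_point + rotate90(prev_edge, normal, left=False)]
```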

    Correct normal transformations for articulated models

    It is well-established that when a matrix is used to transform a rigid object, the normals should be transformed by the inverse transpose of that matrix. However, this is only valid where the transformation matrix is locally constant. This is not the case for models animated with skeletal subspace deformation (SSD), where the transformation matrix is computed for each vertex. We derive a formula for correctly transforming normals on SSD models.
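    For reference, the standard inverse-transpose rule for rigid or otherwise locally constant transforms is sketched below; the corrected per-vertex formula derived in the report for SSD models is not reproduced here.

```python
import numpy as np

def transform_normal(M, n):
    """Transform a surface normal by the inverse transpose of the 3x3 linear part of a
    4x4 transform M. Valid only where M is locally constant, which per-vertex SSD
    blending is not."""
    A = np.linalg.inv(M[:3, :3]).T
    n_new = A @ n
    return n_new / np.linalg.norm(n_new)
```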

    Proof of Field D*'s Case Separation for Arbitrary Simplices

    In their development of the Field D* algorithm, Ferguson et al. prove that a path through a unit-length right-angled triangle, originating from an interpolated edge and travelling to the opposite vertex, must be either a direct or an indirect case. A combination of the two is not optimal. Later work proves this for arbitrary, non-degenerate triangles. In this technical report, we prove the same for non-degenerate simplices, which are generalisations of triangles to higher dimensions.